# VoxPopuli corpus

All of the checkpoints below are published under the `facebook` organization.

| Model | Description | Tags | Downloads | Likes |
|---|---|---|---|---|
| Wav2vec2 Base 10k Voxpopuli | A foundational speech model pretrained on 10,000 hours of unlabeled data from the VoxPopuli corpus, supporting multilingual speech processing | Speech Recognition, Transformers, Other | 2,504 | 0 |
| Wav2vec2 Base Hr Voxpopuli V2 | Speech model based on Facebook's Wav2Vec2 architecture, pretrained on the Croatian subset of the VoxPopuli corpus | Speech Recognition, Transformers, Other | 30 | 1 |
| Wav2vec2 Base Es Voxpopuli | Wav2Vec2 base model pretrained on unlabeled Spanish data from VoxPopuli | Speech Recognition, Transformers, Spanish | 39 | 2 |
| Wav2vec2 Base Pt Voxpopuli V2 | Wav2Vec2 base model pretrained on the Portuguese subset of the VoxPopuli corpus, suitable for speech recognition tasks | Speech Recognition, Transformers, Other | 30 | 0 |
| Wav2vec2 Base Sv Voxpopuli | Wav2Vec2 base model pretrained on the Swedish subset of the VoxPopuli corpus, suitable for Swedish speech recognition tasks | Speech Recognition, Transformers, Other | 33 | 0 |
| Wav2vec2 Base Sk Voxpopuli V2 | Wav2Vec2 base model pretrained on Slovak data from the VoxPopuli corpus, suitable for speech recognition tasks | Speech Recognition, Transformers, Other | 31 | 0 |
| Wav2vec2 Base Et Voxpopuli V2 | Speech model based on Facebook's Wav2Vec2 framework, pretrained specifically for Estonian | Speech Recognition, Transformers, Other | 30 | 0 |
| Wav2vec2 Base Cs Voxpopuli V2 | Wav2Vec2 base model pretrained on the VoxPopuli corpus, specialized for Czech speech processing | Speech Recognition, Transformers, Other | 33 | 1 |
| Wav2vec2 Base Da Voxpopuli V2 | Speech model based on Facebook's Wav2Vec2 architecture, pretrained for Danish on 13.6k hours of unlabeled data from the VoxPopuli corpus | Speech Recognition, Transformers, Other | 35 | 0 |
| Wav2vec2 Base It Voxpopuli | Wav2Vec2 base model pretrained on unlabeled Italian data from VoxPopuli, suitable for speech recognition tasks | Speech Recognition, Transformers, Other | 32 | 0 |
| Wav2vec2 Base Bg Voxpopuli V2 | Speech model based on Facebook's Wav2Vec2 architecture, pretrained specifically for Bulgarian and suitable for speech recognition tasks | Speech Recognition, Transformers, Other | 30 | 0 |
| Wav2vec2 Base Fr Voxpopuli V2 | Facebook's Wav2Vec2 base model, pretrained exclusively on French using 22.8k hours of unlabeled data from the VoxPopuli corpus | Speech Recognition, Transformers, French | 103 | 1 |
| Wav2vec2 Base Fr Voxpopuli | Wav2Vec2 base model pretrained on unannotated French data from VoxPopuli, suitable for French speech recognition tasks | Speech Recognition, Transformers, French | 30 | 0 |
| Wav2vec2 Base 100k Voxpopuli | A speech base model pretrained on 100,000 hours of unannotated data from the VoxPopuli corpus | Speech Recognition, Transformers, Other | 148 | 4 |
| Wav2vec2 Large 100k Voxpopuli | A speech model pretrained on 100,000 hours of unlabeled data from the VoxPopuli corpus, supporting multilingual speech representation learning | Speech Recognition, Other | 2,218 | 4 |
| Wav2vec2 Large West Germanic Voxpopuli V2 | Facebook's Wav2Vec2 large model, pretrained exclusively on 66.3k hours of unlabeled data from the West Germanic subset of the VoxPopuli corpus | Speech Recognition, Transformers | 25 | 1 |
| Wav2vec2 Large El Voxpopuli V2 | Greek speech model pretrained on 17.7k hours of unlabeled data from the VoxPopuli corpus | Speech Recognition, Transformers, Other | 24 | 0 |

Note: these checkpoints are pretrained on unlabeled audio only; they have no ASR head and must be fine-tuned (e.g. with a CTC objective) before they can transcribe speech.